
MiMo-VL Technical Report
I. Introduction
In this report, we share our efforts to build a compact yet powerful VLM, MiMo-VL-7B. MiMo-VL-7B comprises (1) a native resolution ViT encoder that preserves fine-grained visual details, (2) an MLP projector for efficient cross-modal alignment, and (3) our MiMo-7B language model, specifically optimized for complex reasoning tasks.
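The report does not detail the projector's internals, but a minimal PyTorch sketch helps fix the idea of MLP-based cross-modal alignment: patch features from the vision encoder are mapped into the language model's embedding space. The layer count and dimensions below are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class VisionLanguageProjector(nn.Module):
    """Illustrative two-layer MLP projector mapping vision-encoder patch
    features into the language model's embedding space.
    Dimensions and depth are hypothetical, not MiMo-VL-7B's actual config."""

    def __init__(self, vision_dim: int = 1152, lm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vision_dim) -> (batch, num_patches, lm_dim)
        return self.proj(patch_features)

# Projected visual tokens are then interleaved with text embeddings
# and consumed by the language model.
projector = VisionLanguageProjector()
visual_tokens = projector(torch.randn(1, 256, 1152))
```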
The development of MiMo-VL-7B involves two sequential training processes: (1) A four-stage pre-training phase, which includes projector warmup, vision-language alignment, general multi-modal pre-training, and long-context Supervised Fine-Tuning (SFT). This phase yields the MiMo-VL-7B-SFT model. (2) A subsequent post-training phase, where we introduce Mixed On-policy Reinforcement Learning (MORL), a novel framework that seamlessly integrates diverse reward signals spanning perception accuracy, visual grounding precision, logical reasoning capabilities, and human/AI preferences. This phase yields the MiMo-VL-7B-RL model.
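To make the MORL setup more concrete, here is a hypothetical sketch of how verifiable reward signals from different domains can be routed during on-policy training. The function names, task tags, and routing scheme are illustrative assumptions; the report only specifies that reward signals span perception, grounding, reasoning, and preference domains.

```python
from typing import Callable, Dict

def reasoning_reward(response: str, sample: dict) -> float:
    # Verifiable reward: credit only when the gold final answer appears.
    return 1.0 if sample["answer"] in response else 0.0

def grounding_reward(response: str, sample: dict) -> float:
    # IoU between a box parsed from the response and the gold box
    # (box parsing is omitted; sample["pred_box"] stands in for it).
    (x1, y1, x2, y2), (gx1, gy1, gx2, gy2) = sample["pred_box"], sample["gold_box"]
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    return inter / union if union > 0 else 0.0

REWARD_FNS: Dict[str, Callable[[str, dict], float]] = {
    "reasoning": reasoning_reward,
    "grounding": grounding_reward,
}

def mixed_reward(response: str, sample: dict) -> float:
    # On-policy rollouts from mixed task domains are each scored by the
    # verifier matching their task tag.
    return REWARD_FNS[sample["task"]](response, sample)
```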
We open-source the MiMo-VL-7B series, including checkpoints of both the SFT and RL models. We believe this report, along with the models, will provide valuable insights for developing powerful reasoning VLMs that benefit the broader community.
During this journey, we find:
- Incorporating high-quality, broad-coverage reasoning data from the pre-training stage is crucial for enhancing model performance
  - We curate high-quality reasoning data by identifying diverse queries, employing large reasoning models to regenerate responses with long CoT, and applying rejection sampling to ensure quality (a sketch of this loop follows this list).
  - Rather than treating this as supplementary fine-tuning data, we incorporate substantial volumes of this synthetic reasoning data directly into the later pre-training stages, where extended training yields continued performance improvements without saturation.
- Mixed On-policy Reinforcement Learning further enhances model performance, while achieving stable simultaneous improvements remains challenging
  - We apply RL across diverse capabilities, including reasoning, perception, grounding, and human preference alignment, spanning text, image, and video modalities. While this hybrid training approach further unlocks the model's potential, interference across data domains remains a challenge.
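The rejection-sampling loop referenced above can be sketched as follows, assuming hypothetical `generate` (a large reasoning model producing long-CoT responses) and `verify` (e.g., rule-based answer matching) callables. This is an illustration of the described recipe, not the team's released pipeline.

```python
def curate_reasoning_data(query, gold_answer, generate, verify, n_samples=8):
    """Sample long-CoT candidates from a large reasoning model and keep
    only responses whose final answer verifies (rejection sampling)."""
    accepted = []
    for _ in range(n_samples):
        response = generate(query)         # long chain-of-thought candidate
        if verify(response, gold_answer):  # e.g., rule-based answer matching
            accepted.append(response)
    return accepted                        # (query, response) pairs to keep
```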
II. Model Details
Models are available at Hugging Face Collections: MiMo-VL and ModelScope Collections: MiMo-VL
| Model | Description | Download (HuggingFace) | Download (ModelScope) |
|---|---|---|---|
| MiMo-VL-7B-SFT | VLM with extraordinary reasoning potential after 4-stage pre-training | [🤗 XiaomiMiMo/MiMo-VL-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-SFT) | [🤖️ XiaomiMiMo/MiMo-VL-7B-SFT](https://modelscope.cn/models/XiaomiMiMo/MiMo-VL-7B-SFT) |
| MiMo-VL-7B-RL | RL model leapfrogging existing open-source models | [🤗 XiaomiMiMo/MiMo-VL-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-RL) | [🤖️ XiaomiMiMo/MiMo-VL-7B-RL](https://modelscope.cn/models/XiaomiMiMo/MiMo-VL-7B-RL) |
III. Evaluation Results
General Capabilities
In general vision-language understanding, the MiMo-VL-7B models achieve state-of-the-art open-source results.
Reasoning Tasks
In multi-modal reasoning, both the SFT and RL models significantly outperform all compared open-source baselines across the evaluated benchmarks.
Results marked with * are obtained using our evaluation framework. Tasks marked with † are evaluated by GPT-4o.
GUI Tasks
MiMo-VL-7B-RL possesses exceptional GUI understanding and grounding capabilities. As a general-purpose VL model, MiMo-VL achieves comparable or even superior performance to GUI-specialized models.
Elo Rating
With our in-house evaluation dataset and GPT-4o judgments, MiMo-VL-7B-RL achieves the highest Elo rating among all evaluated open-source vision-language models, ranking first across models spanning from 7B to 72B parameters.
IV. Deployment
The MiMo-VL-7B series maintains full compatibility with the `Qwen2_5_VLForConditionalGeneration` architecture for deployment and inference.
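A minimal inference sketch following the standard Qwen2.5-VL usage pattern in `transformers` (which provides `Qwen2_5_VLForConditionalGeneration`). The prompt structure and preprocessing below are assumptions based on that pattern; consult the official model card for the exact recommended recipe (e.g., `qwen_vl_utils`-based preprocessing).

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "XiaomiMiMo/MiMo-VL-7B-RL"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Build a single-image chat turn; the chat template inserts image
# placeholder tokens that the processor pairs with the PIL image.
image = Image.open("example.jpg")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# max_new_tokens is a placeholder; tune it for your workload.
output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```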
V. Citation
```bibtex
@misc{coreteam2025mimovl,
  title={MiMo-VL Technical Report},
  author={{Xiaomi LLM-Core Team}},
  year={2025},
  url={https://github.com/XiaomiMiMo/MiMo-VL},
}
```
VI. Contact
Please contact us at mimo@xiaomi.com or open an issue if you have any questions.